329 research outputs found

    Collaborative damage mapping for emergency response: the role of Cognitive Systems Engineering

    Remote sensing is increasingly used to assess disaster damage, traditionally by professional image analysts. A recent alternative is crowdsourcing by volunteers experienced in remote sensing, using internet-based mapping portals. We identify a range of problems in current approaches, including how volunteers can best be instructed for the task, how to ensure that instructions are accurately understood and translate into valid results, and how the mapping scheme must be adapted to different map users' needs. The volunteers, the mapping organizers, and the map users all perform complex cognitive tasks, yet little is known about the actual information needs of the users. We also identify problematic assumptions about the capabilities of the volunteers, principally related to their ability to perform the mapping and to understand mapping instructions unambiguously. We propose that any robust scheme for collaborative damage mapping must rely on Cognitive Systems Engineering and its principal method, Cognitive Task Analysis (CTA), to understand the information and decision requirements of the map and image users, how the volunteers can be optimally instructed, and how their mapping contributions can be merged into suitable map products. We recommend an iterative approach involving map users, remote sensing specialists, cognitive systems engineers and instructional designers, as well as experimental psychologists.

    Towards post-disaster debris identification for precise damage and recovery assessments from UAV and satellite images


    Satellite remote sensing for near-real-time data collection


    Remote sensing-based proxies for urban disaster risk management and resilience: A review

    © 2018 by the authors. Rapid increase in population and growing concentration of capital in urban areas have escalated both the severity and the longer-term impact of natural disasters. As a result, Disaster Risk Management (DRM) and risk reduction have been gaining importance for urban areas. Remote sensing plays a key role in providing information for urban DRM analysis due to its agile data acquisition, synoptic perspective, growing range of data types, and instrument sophistication, as well as low cost. As a consequence, numerous methods have been developed to extract information for the various phases of DRM analysis. However, given the diverse information needs, only a few of the parameters of interest are extracted directly, while the majority have to be elicited indirectly using proxies. This paper provides a comprehensive review of the proxies developed for two risk elements typically associated with pre-disaster situations (vulnerability and resilience) and two post-disaster elements (damage and recovery), focusing on urban DRM. The proxies are reviewed in the context of four main environments and their corresponding sub-categories: built-up (buildings, transport, and others), economic (macro, regional and urban economics, and logistics), social (services and infrastructures, and socio-economic status), and natural. All environments and the corresponding proxies are discussed and analyzed in terms of their reliability and sufficiency in comprehensively addressing the selected DRM assessments. We highlight strengths and identify gaps and limitations in current proxies, including inconsistencies in the terminology for indirect measurements. We present a systematic overview for each group of the reviewed proxies that could simplify cross-fertilization across different DRM domains and assist the further development of methods. By systematizing examples from the wider remote sensing domain and insights from the social and economic sciences, we suggest a direction for developing new proxies, potentially also suitable for capturing functional recovery.

    Towards Learning Low-Light Indoor Semantic Segmentation with Illumination-Invariant Features

    Semantic segmentation models are often affected by illumination changes and fail to predict correct labels. Although indoor semantic segmentation has been studied extensively, it has not been studied in low-light environments. In this paper, we propose a new framework, LISU, for Low-light Indoor Scene Understanding. We first decompose the low-light images into reflectance and illumination components, and then jointly learn reflectance restoration and semantic segmentation. To train and evaluate the proposed framework, we propose a new data set, namely LLRGBD, which consists of a large synthetic low-light indoor data set (LLRGBD-synthetic) and a small real data set (LLRGBD-real). The experimental results show that illumination-invariant features effectively improve the performance of semantic segmentation. Compared with the baseline model, the proposed LISU framework improves mIoU by 11.5%. In addition, pre-training on our synthetic data set increases mIoU by 7.2%. Our data sets and models are available on our project website.
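The decomposition step described above follows the classic Retinex assumption: an observed image is the element-wise product of an illumination-invariant reflectance map and an illumination map. The following minimal sketch (not the authors' code; the array shapes and the perfect-illumination-estimate assumption are ours for illustration) shows why dividing out an illumination estimate yields features that do not depend on lighting:

```python
import numpy as np

# Retinex-style image model (illustrative sketch, not the LISU implementation):
# observed = reflectance (illumination-invariant) * illumination, element-wise.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.2, 1.0, size=(4, 4, 3))    # surface properties
illumination = rng.uniform(0.05, 0.3, size=(4, 4, 1))  # low-light condition
observed = reflectance * illumination                  # dark input image

# With an (here, perfect) illumination estimate, dividing it out restores
# the reflectance, which is why features computed on it are robust to lighting.
recovered = observed / illumination
print(np.allclose(recovered, reflectance))  # True
```

In practice the illumination map is not known and must itself be estimated by a learned decomposition network, which is the harder part of the problem.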